Privacy Issue


Unesco adopts global standards on 'wild west' field of neurotechnology

The Guardian

Unesco has adopted a set of global standards on the ethics of neurotechnology, a field that has been described as "a bit of a wild west". The UN body's recommendations, driven by advances in AI and the proliferation of consumer-oriented neurotech devices, are the latest move in a growing international effort to put guardrails around a burgeoning frontier: technologies that harness data from the brain and nervous system. The standards define a new category of data, 'neural data', and suggest guidelines governing its protection. "There is no control," said Unesco's chief of bioethics, Dafna Feinholz.


mCaptcha: Replacing Captchas with Rate Limiters to Improve Security and Accessibility

Communications of the ACM

For many years, publicly accessible Web applications have protected their services from bots and scripts by asking users to solve captchas (Completely Automated Public Turing tests to tell Computers and Humans Apart): puzzles designed to be challenging for machines yet simple for humans, such as clicking on certain locations in an image or recognizing distorted characters or digits. Designed to stop automated attacks like spamming, data scraping, and brute-force login attempts, captchas act as a security precaution to determine whether a user is a human or a software program. Captcha techniques are employed in many areas, including e-transactions, entering a website's secure areas, gathering email signups, and ensuring that only humans vote in polls and surveys. They are also used to hinder attackers and spammers from injecting malicious software into online registration forms. As such, captchas serve as a line of defense against threats such as DDoS attacks, dictionary attacks, malvertising, botnets, and spam.
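The article's core proposal, rate limiting in place of perceptual puzzles, is easy to illustrate. Below is a minimal token-bucket sketch in Python; it is a generic illustration of the technique, not mCaptcha's actual implementation, and all names and parameters are hypothetical:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: each client may burst up to
    `capacity` requests, refilled at `rate` tokens per second."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per client (keyed by IP, for example); a real service would
# expire idle buckets and share this state across workers.
buckets: dict[str, TokenBucket] = {}

def handle_request(client_ip: str) -> str:
    bucket = buckets.setdefault(client_ip, TokenBucket(capacity=10, rate=1.0))
    return "200 OK" if bucket.allow() else "429 Too Many Requests"
```

Unlike a captcha, this imposes no visual or cognitive task on the user, which is the accessibility argument in the article's title.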


A Decade of Privacy-Relevant Android App Reviews: Large Scale Trends

Akgul, Omer, Peddinti, Sai Teja, Taft, Nina, Mazurek, Michelle L., Harkous, Hamza, Srivastava, Animesh, Seguin, Benoit

arXiv.org Artificial Intelligence

We present an analysis of 12 million instances of privacy-relevant reviews publicly visible on the Google Play Store that span a 10-year period. By leveraging state-of-the-art NLP techniques, we examine what users have been writing about privacy along multiple dimensions: time, countries, app types, diverse privacy topics, and even across a spectrum of emotions. We find consistent growth of privacy-relevant reviews, and explore topics that are trending (such as Data Deletion and Data Theft) as well as those on the decline (such as privacy-relevant reviews on sensitive permissions). We find that although privacy reviews come from more than 200 countries, 33 countries provide 90% of privacy reviews. We conduct a comparison across countries by examining the distribution of privacy topics a country's users write about, and find that geographic proximity is not a reliable indicator that nearby countries have similar privacy perspectives. We uncover some countries with unique patterns and explore those herein. Surprisingly, we find that it is not uncommon for reviews that discuss privacy to be positive (32%); many users express pleasure about privacy features within apps or about privacy-focused apps. We also uncover some unexpected behaviors, such as the use of reviews to deliver privacy disclaimers to developers. Finally, we demonstrate the value of analyzing app reviews with our approach as a complement to existing methods for understanding users' perspectives about privacy.
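The authors' pipeline is not reproduced here, but the general shape of such an analysis, flagging privacy-relevant reviews and bucketing them by topic, can be sketched with an off-the-shelf zero-shot classifier. The model choice and topic labels below are illustrative assumptions, not the paper's method:

```python
from collections import Counter
from transformers import pipeline  # assumption: Hugging Face transformers is installed

# Hypothetical topic labels loosely inspired by the paper's themes.
TOPICS = ["data deletion", "data theft", "sensitive permissions", "privacy feature praise"]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def bucket_reviews(reviews: list[str]) -> Counter:
    """Assign each privacy-relevant review its best-scoring topic."""
    counts: Counter = Counter()
    for text in reviews:
        result = classifier(text, candidate_labels=TOPICS)
        counts[result["labels"][0]] += 1  # top-ranked label
    return counts

print(bucket_reviews([
    "Please delete my account data, there is no option anywhere!",
    "Love that this app asks before using the microphone.",
]))
```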


Human-Centered Privacy Research in the Age of Large Language Models

Li, Tianshi, Das, Sauvik, Lee, Hao-Ping, Wang, Dakuo, Yao, Bingsheng, Zhang, Zhiping

arXiv.org Artificial Intelligence

The emergence of large language models (LLMs), and their increased use in user-facing systems, has led to substantial privacy concerns. To date, research on these privacy concerns has been model-centered: exploring how LLMs lead to privacy risks like memorization, or can be used to infer personal characteristics about people from their content. We argue that there is a need for more research focusing on the human aspect of these privacy issues: e.g., research on how design paradigms for LLMs affect users' disclosure behaviors, users' mental models and preferences for privacy controls, and the design of tools, systems, and artifacts that empower end-users to reclaim ownership over their personal data. Because systems powered by these models have imperfect privacy properties, building usable, efficient, and privacy-friendly versions of them requires exactly this kind of work; our goal is to initiate discussions that outline an agenda for conducting human-centered research on privacy issues in LLM-powered systems. This Special Interest Group (SIG) aims to bring together researchers with backgrounds in usable security and privacy, human-AI collaboration, NLP, or any other related domains to share their perspectives and experiences on this problem, and to help our community establish a collective understanding of the challenges, research opportunities, research methods, and strategies for collaborating with researchers outside of HCI.


Trust model of privacy-concerned, emotionally-aware agents in a cooperative logistics problem

Carbo, J., Molina, J. M.

arXiv.org Artificial Intelligence

In this paper we propose a trust model for a hypothetical mixed environment in which humans and unmanned vehicles cooperate. We address the inclusion of emotions inside a trust model in a way that is coherent with practical approaches to current psychology theories. The most innovative contribution is how privacy issues play a role in the cooperation decisions of the emotional trust model. Both emotions and trust have been cognitively modeled and managed with the Beliefs, Desires and Intentions (BDI) paradigm in autonomous agents implemented in GAML (the programming language of the GAMA agent platform) that communicate using the IEEE FIPA standard. The trusting behaviour of these emotional agents is tested in a cooperative logistics problem in which agents have to move objects to destinations, and some of the objects and places have privacy issues. Simulations of this logistics problem show how emotions and trust improve the performance of agents in terms of both time savings and privacy protection.
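As a language-neutral companion to the GAML implementation the abstract describes, here is a hypothetical Python sketch of the kind of update such a model might perform: trust in a partner rises or falls with interaction outcomes, privacy-sensitive failures are punished more, and the agent's emotional state scales the adjustment. The weights and formula are invented for illustration and are not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class EmotionalAgent:
    """Toy sketch of an agent whose trust in partners is modulated by mood."""
    mood: float = 0.0  # in [-1, 1]; negative = distressed, positive = content
    trust: dict[str, float] = field(default_factory=dict)

    def update_trust(self, partner: str, success: bool, privacy_sensitive: bool) -> None:
        t = self.trust.get(partner, 0.5)       # neutral prior for strangers
        delta = 0.1 if success else -0.2       # invented base weights
        if privacy_sensitive and not success:
            delta *= 1.5                       # privacy breaches hurt trust more
        if delta < 0:
            delta *= 1.0 - min(self.mood, 0.0) # bad mood amplifies penalties
        else:
            delta *= 1.0 + max(self.mood, 0.0) # good mood amplifies rewards
        self.trust[partner] = min(1.0, max(0.0, t + delta))

    def will_cooperate(self, partner: str, privacy_sensitive: bool) -> bool:
        # Demand more trust before handing over privacy-sensitive objects.
        threshold = 0.7 if privacy_sensitive else 0.4
        return self.trust.get(partner, 0.5) >= threshold
```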


Privacy Issues in Large Language Models: A Survey

Neel, Seth, Chang, Peter

arXiv.org Artificial Intelligence

This is the first survey of the active area of AI research that focuses on privacy issues in Large Language Models (LLMs). Specifically, we focus on work that red-teams models to highlight privacy risks, attempts to build privacy into the training or inference process, enables efficient data deletion from trained models to comply with existing privacy regulations, and tries to mitigate copyright issues. Our focus is on summarizing technical research that develops algorithms, proves theorems, and runs empirical evaluations. While there is an extensive body of legal and policy work addressing these challenges from a different angle, that is not the focus of our survey. Nevertheless, these works, along with recent legal developments, do inform how these technical problems are formalized, and so we discuss them briefly in Section 1. While we have made our best effort to include all the relevant work, due to the fast-moving nature of this research we may have missed some recent work. If we have missed some of your work, please contact us, as we will attempt to keep this survey relatively up to date. We are maintaining a repository with the list of papers covered in this survey and any relevant code that was publicly available at https://github.com/safr-ml-lab/survey-llm.
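One red-teaming technique the survey covers, membership inference, has a particularly simple baseline: predict that a text was in the training set if the model assigns it unusually low loss. A minimal sketch of that loss-threshold baseline follows; the model name and threshold are placeholder assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer  # assumption: transformers installed

MODEL = "gpt2"  # placeholder model for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

@torch.no_grad()
def sequence_loss(text: str) -> float:
    """Average per-token negative log-likelihood under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    return model(ids, labels=ids).loss.item()

def likely_member(text: str, threshold: float = 3.0) -> bool:
    # Loss-threshold baseline: unusually low loss suggests memorization.
    # In practice the threshold is calibrated on known member/non-member samples.
    return sequence_loss(text) < threshold
```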


"It's a Fair Game'', or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents

Zhang, Zhiping, Jia, Michelle, Lee, Hao-Ping, Yao, Bingsheng, Das, Sauvik, Lerner, Ada, Wang, Dakuo, Li, Tianshi

arXiv.org Artificial Intelligence

The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, primarily model-centered, does not provide insight into users' perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users are constantly faced with trade-offs between privacy, utility, and convenience when using LLM-based CAs. However, users' erroneous mental models and the dark patterns in system design limited their awareness and comprehension of the privacy risks. Additionally, the human-like interactions encouraged more sensitive disclosures, which complicated users' ability to navigate the trade-offs. We discuss practical design guidelines and the needs for paradigmatic shifts to protect the privacy of LLM-based CA users.


ChatGPT has privacy issues, 5 alternatives you can use

#artificialintelligence

All eyes of the technology world are on Italy after the national data protection authority intervened and notified ChatGPT of its failure to provide a privacy policy. The Italian privacy regulator challenged OpenAI, the company that developed the most popular chatbot of the moment, for failing to clarify how it processes users' personal data and for failing to prevent minors under 13 years of age from accessing the service. An immediate controversy arose between those who consider it inappropriate to halt the development of artificial intelligence and those who consider the authority's act inevitable, since the GDPR rules leave no room for doubt. Although many Italian users have relied on VPNs to continue using ChatGPT, several other countries are considering following the Italian example and asking OpenAI to address these privacy shortcomings. In the US, there is growing recognition that the service in which Microsoft has invested $10 billion has a serious problem with the processing of the personal data it learns during training, while in Europe the European Data Protection Board stands ready to evaluate and potentially extend the Italian decision.


Amazon's iRobot purchase reportedly faces EU investigation

Engadget

American politicians may not be the only government figures concerned about Amazon's proposed acquisition of iRobot. Sources cited by the Financial Times claim European Union regulators are grilling Amazon ahead of a "likely" official investigation. The European Commission has sent questions about potential privacy issues, including Roomba robot vacuums' ability to capture imagery. Officials are worried Amazon might combine the pictures with Alexa data to gain a "competitive advantage," according to one source. MIT Technology Review recently discovered that photos taken by development versions of Roomba J7 vacuums had reached private Discord and Facebook groups. At the time, iRobot said the technology never made it to production models, was clearly labeled for testers, and included a warning to remove "sensitive" items from the robovac's view.


Understanding Ethics, Privacy, and Regulations in Smart Video Surveillance for Public Safety

Ardabili, Babak Rahimi, Pazho, Armin Danesh, Noghre, Ghazal Alinezhad, Neff, Christopher, Ravindran, Arun, Tabkhi, Hamed

arXiv.org Artificial Intelligence

Recently, Smart Video Surveillance (SVS) systems have been receiving more attention among scholars and developers as a substitute for current passive surveillance systems. These systems are used to make policing and monitoring more efficient and to improve public safety. However, the nature of these systems, which monitor the public's daily activities, raises distinct ethical challenges. There are different approaches to addressing privacy issues when implementing SVS. In this paper, we focus on the role of design in addressing the ethical and privacy challenges of SVS. Reviewing four privacy protection regulations that together provide an overview of best practices for privacy protection, we argue that ethical and privacy concerns can be addressed through four lenses: algorithm, system, model, and data. As a case study, we describe our proposed system and illustrate how it can serve as a baseline for designing a privacy-preserving system that delivers safety to society. We use several Artificial Intelligence algorithms, such as object detection, single- and multi-camera re-identification, action recognition, and anomaly detection, to provide a basic functional system. We also use cloud-native services to implement a smartphone application that delivers the outputs to end users.
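To make the "data" lens concrete, one common privacy-by-design tactic is to pseudonymize detected persons at the edge so that only one-way tokens and behavioral labels, never frames or raw identities, reach downstream services. The sketch below illustrates that tactic in Python; the detection fields and salt handling are hypothetical, not the authors' system:

```python
import hashlib
import os
from dataclasses import dataclass

SALT = os.urandom(16)  # per-deployment secret; rotating it unlinks old tracks

@dataclass
class Detection:
    track_id: int    # re-identification track within the camera network
    action: str      # e.g. output of an action-recognition model
    anomalous: bool  # flag from an anomaly-detection model

def pseudonymize(det: Detection) -> dict:
    """Emit only what the safety application needs: a one-way pseudonym
    plus behavioral labels. No frames or raw identities leave the edge."""
    token = hashlib.sha256(SALT + det.track_id.to_bytes(8, "big")).hexdigest()[:16]
    return {"subject": token, "action": det.action, "anomalous": det.anomalous}

print(pseudonymize(Detection(track_id=42, action="walking", anomalous=False)))
```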